
Incremental learning of temporally-coherent Gaussian mixture models



Abstract

In this paper we address the problem of learning Gaussian Mixture Models (GMMs) incrementally. Unlike previous approaches, which universally assume that new data arrives in blocks representable by GMMs that are then merged with the current model estimate, our method handles the case when novel data points arrive one-by-one, while requiring little additional memory. We keep only two GMMs in memory and no historical data. The current fit is updated under the assumption that the number of components is fixed; this number is increased (or reduced) only when enough evidence for a new component is seen. Such evidence is deduced from the change relative to the oldest fit of the same complexity, termed the Historical GMM, a concept central to our method. The performance of the proposed method is demonstrated qualitatively and quantitatively on several synthetic data sets and on video sequences of faces acquired in realistic imaging conditions.
